A cellular automaton (pl. cellular automata, abbrev. CA) is a discrete model studied in computability theory, mathematics, physics, complexity science, theoretical biology and microstructure modeling. It consists of a regular grid of cells, each in one of a finite number of states, such as "On" and "Off" (in contrast to a coupled map lattice). The grid can be in any finite number of dimensions. For each cell, a set of cells called its neighborhood (usually including the cell itself) is defined relative to the specified cell. For example, the neighborhood of a cell might be defined as the set of cells a distance of 2 or less from the cell. An initial state (time t=0) is selected by assigning a state for each cell. A new generation is created (advancing t by 1), according to some fixed rule (generally, a mathematical function) that determines the new state of each cell in terms of the current state of the cell and the states of the cells in its neighborhood. For example, the rule might be that the cell is "On" in the next generation if exactly two of the cells in the neighborhood are "On" in the current generation, otherwise the cell is "Off" in the next generation. Typically, the rule for updating the state of cells is the same for each cell and does not change over time, and is applied to the whole grid simultaneously, though exceptions are known.
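For concreteness, the example rule above can be written down directly. The following is a minimal sketch in Python, where the one-dimensional grid, the "Off" boundary, and the radius-2 neighborhood are the illustrative choices from the example:

```python
def step(cells):
    """One generation of the example rule from the text: a cell is On (1)
    in the next generation exactly when two of the cells in its
    neighborhood are On in the current generation, and Off (0) otherwise.

    Here the neighborhood is all cells within distance 2 (the cell
    itself included), and cells beyond the ends of the row count as Off.
    """
    n = len(cells)
    def on_count(i):
        # Sum of the states over the neighborhood of cell i.
        return sum(cells[j] for j in range(max(0, i - 2), min(n, i + 3)))
    return [1 if on_count(i) == 2 else 0 for i in range(n)]

row = [0, 0, 0, 1, 1, 0, 0, 0]   # initial state at t = 0
row = step(row)                  # state at t = 1
```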
One way to simulate a two-dimensional cellular automaton is with an infinite sheet of graph paper along with a set of rules for the cells to follow. Each square is called a "cell" and each cell has two possible states, black and white. The "neighbors" of a cell are the 8 squares touching it. For such a cell and its neighbors, there are 512 (= 2^9) possible patterns. For each of the 512 possible patterns, the rule table would state whether the center cell will be black or white on the next time interval. Conway's Game of Life is a popular version of this model.
It is usually assumed that every cell in the universe starts in the same state, except for a finite number of cells in other states, often called a configuration. More generally, it is sometimes assumed that the universe starts out covered with a periodic pattern, and only a finite number of cells violate that pattern. The latter assumption is common in one-dimensional cellular automata.
Cellular automata are often simulated on a finite grid rather than an infinite one. In two dimensions, the universe would be a rectangle instead of an infinite plane. The obvious problem with finite grids is how to handle the cells on the edges. How they are handled will affect the values of all the cells in the grid. One possible method is to allow the values in those cells to remain constant. Another method is to define neighborhoods differently for these cells. One could say that they have fewer neighbors, but then one would also have to define new rules for the cells located on the edges. These cells are usually handled with a toroidal arrangement: when one goes off the top, one comes in at the corresponding position on the bottom, and when one goes off the left, one comes in on the right. (This essentially simulates an infinite periodic tiling, and in the field of partial differential equations is sometimes referred to as periodic boundary conditions.) This can be visualized as taping the left and right edges of the rectangle to form a tube, then taping the top and bottom edges of the tube to form a torus (doughnut shape). Universes of other dimensions are handled similarly. This solves boundary problems with neighborhoods. For example, in a 1-dimensional cellular automaton like the examples below, the neighborhood of a cell x_i^t (where t is the time step, drawn vertically, and i is the cell index, drawn horizontally) is {x_{i-1}^{t-1}, x_i^{t-1}, x_{i+1}^{t-1}}. Problems arise when a neighborhood of a cell on the left border references its upper-left cell, which does not exist in the cellular space.
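In practice the toroidal arrangement amounts to reducing cell indices modulo the grid dimensions, so border cells need no special cases. A minimal sketch (two dimensions and the Moore neighborhood are illustrative choices):

```python
def neighbors(grid, r, c):
    """The 8 Moore neighbors of cell (r, c) on a toroidal grid.

    Indices that run off an edge wrap around to the opposite edge via
    the modulo operation, so cells on the border need no special rules.
    """
    rows, cols = len(grid), len(grid[0])
    return [grid[(r + dr) % rows][(c + dc) % cols]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)]
```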
Stanisław Ulam, while working at the Los Alamos National Laboratory in the 1940s, studied the growth of crystals, using a simple lattice network as his model. At the same time, John von Neumann, Ulam's colleague at Los Alamos, was working on the problem of self-replicating systems. Von Neumann's initial design was founded upon the notion of one robot building another robot. This design is known as the kinematic model.[2][3] As he developed this design, von Neumann came to realize the great difficulty of building a self-replicating robot, and the great cost of providing the robot with a "sea of parts" from which to build its replica. Ulam suggested that von Neumann develop his design around a mathematical abstraction, such as the one Ulam used to study crystal growth. Thus was born the first system of cellular automata. Like Ulam's lattice network, von Neumann's cellular automata are two-dimensional, with his self-replicator implemented algorithmically. The result was a universal copier and constructor working within a CA with a small neighborhood (only those cells that touch are neighbors; for von Neumann's cellular automata, only orthogonal cells), and with 29 states per cell. Von Neumann gave an existence proof that a particular pattern would make endless copies of itself within the given cellular universe. This design is known as the tessellation model, and is called a von Neumann universal constructor.
Also in the 1940s, Norbert Wiener and Arturo Rosenblueth developed a cellular automaton model of excitable media.[4] Their specific motivation was the mathematical description of impulse conduction in cardiac systems. Their original work continues to be cited in modern research publications on cardiac arrhythmia and excitable systems.[5]
In the 1960s, cellular automata were studied as a particular type of dynamical system, and the connection with the mathematical field of symbolic dynamics was established for the first time. In 1969, G. A. Hedlund compiled many results following this point of view[6] in what is still considered a seminal paper for the mathematical study of cellular automata. One of the most fundamental results is the characterization of the set of global rules of cellular automata as the set of continuous endomorphisms of shift spaces.
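This characterization is now known as the Curtis-Hedlund-Lyndon theorem. A standard formulation (the notation here is illustrative) reads:

```latex
% Curtis-Hedlund-Lyndon theorem, in a standard formulation.
% S is a finite state set, S^{\mathbb{Z}} the space of configurations
% with the product topology, and \sigma the shift map.
F \colon S^{\mathbb{Z}} \to S^{\mathbb{Z}}
  \text{ is the global rule of a cellular automaton}
\iff
F \text{ is continuous and } F \circ \sigma = \sigma \circ F .
```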
In the 1970s a two-state, two-dimensional cellular automaton named Game of Life became very widely known, particularly among the early computing community. Invented by John Conway and popularized by Martin Gardner in a Scientific American article, its rules are as follows: if a black cell has 2 or 3 black neighbors, it stays black; if a black cell has fewer than 2 or more than 3 black neighbors, it becomes white; if a white cell has exactly 3 black neighbors, it becomes black. Despite its simplicity, the system achieves an impressive diversity of behavior, fluctuating between apparent randomness and order. One of the most apparent features of the Game of Life is the frequent occurrence of gliders, arrangements of cells that essentially move themselves across the grid. It is possible to arrange the automaton so that the gliders interact to perform computations, and after much effort it has been shown that the Game of Life can emulate a universal Turing machine.[7] Possibly because it was viewed as a largely recreational topic, little follow-up work was done outside of investigating the particularities of the Game of Life and a few related rules.
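These three rules can be implemented directly. The following is a minimal sketch (pure Python on a small toroidal grid; both are illustrative choices):

```python
def life_step(grid):
    """One generation of Conway's Game of Life on a toroidal grid.

    grid is a list of rows; each cell is 1 (black/alive) or 0 (white/dead).
    """
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the 8 Moore neighbors, wrapping around the edges.
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            if grid[r][c]:
                new[r][c] = 1 if n in (2, 3) else 0   # survives, else dies
            else:
                new[r][c] = 1 if n == 3 else 0        # birth
    return new

# A glider on a 6x6 torus: after 4 generations it has moved one cell
# diagonally (modulo the wraparound).
grid = [[0] * 6 for _ in range(6)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
for _ in range(4):
    grid = life_step(grid)
```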
In 1969, however, German computer pioneer Konrad Zuse published his book Calculating Space, proposing that the physical laws of the universe are discrete by nature, and that the entire universe is the output of a deterministic computation on a giant cellular automaton. This was the first book on what today is called digital physics.
In 1983 Stephen Wolfram published the first of a series of papers systematically investigating a very basic but essentially unknown class of cellular automata, which he termed elementary cellular automata (see below). The unexpected complexity of the behavior of these simple rules led Wolfram to suspect that complexity in nature may be due to similar mechanisms. Additionally, during this period Wolfram formulated the concepts of intrinsic randomness and computational irreducibility, and suggested that rule 110 may be universal, a conjecture proved in the 1990s by Wolfram's research assistant Matthew Cook.
In 2002 Wolfram published a 1280-page text A New Kind of Science, which extensively argues that the discoveries about cellular automata are not isolated facts but are robust and have significance for all disciplines of science. Despite much confusion in the press and academia, the book did not argue for a fundamental theory of physics based on cellular automata, and although it did describe a few specific physical models based on cellular automata, it also provided models based on qualitatively different abstract systems.
The simplest nontrivial CA would be one-dimensional, with two possible states per cell, and a cell's neighbors defined to be the adjacent cells on either side of it. A cell and its two neighbors form a neighborhood of 3 cells, so there are 2^3 = 8 possible patterns for a neighborhood. There are then 2^8 = 256 possible rules. These 256 CAs are generally referred to by their Wolfram code, a standard naming convention invented by Stephen Wolfram which gives each rule a number from 0 to 255. A number of papers have analyzed and compared these 256 CAs. The rule 30 and rule 110 CAs are particularly interesting. The images below show the history of each when the starting configuration consists of a 1 (in the center of each image) surrounded by 0's. Each row of pixels represents a generation in the history of the automaton, with t=0 being the top row. Each pixel is colored white for 0 and black for 1.
Rule 30 cellular automaton
current pattern | 111 | 110 | 101 | 100 | 011 | 010 | 001 | 000 |
---|---|---|---|---|---|---|---|---|
new state for center cell | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 |
Rule 110 cellular automaton
current pattern | 111 | 110 | 101 | 100 | 011 | 010 | 001 | 000 |
---|---|---|---|---|---|---|---|---|
new state for center cell | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 |
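A Wolfram code can be turned into a simulator by reading each 3-cell neighborhood as a 3-bit index into the 8-bit rule number. A minimal sketch (the grid width and 0-background are illustrative choices):

```python
def eca_step(cells, rule):
    """One generation of an elementary cellular automaton.

    cells is a list of 0/1 states; rule is the Wolfram code (0-255).
    Cells beyond the edges are treated as 0, matching the all-0
    background of the images described above.
    """
    padded = [0] + cells + [0]
    new = []
    for i in range(1, len(padded) - 1):
        # Read the 3-cell neighborhood as a 3-bit number (0-7) and use
        # it to index into the 8-bit rule table.
        pattern = (padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1]
        new.append((rule >> pattern) & 1)
    return new

# Rule 30 from a single 1, as in the images above.
row = [0] * 15 + [1] + [0] * 15
for _ in range(16):
    print("".join(".#"[c] for c in row))
    row = eca_step(row, 30)
```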
Rule 30 exhibits class 3 behavior, meaning even simple input patterns such as that shown lead to chaotic, seemingly random histories.
Rule 110, like the Game of Life, exhibits what Wolfram calls class 4 behavior, which is neither completely random nor completely repetitive. Localized structures appear and interact in various complicated-looking ways. In the course of the development of A New Kind of Science, as a research assistant to Stephen Wolfram in 1994, Matthew Cook proved that some of these structures were rich enough to support universality. This result is interesting because rule 110 is an extremely simple one-dimensional system, and one which is difficult to engineer to perform specific behavior. This result therefore provides significant support for Wolfram's view that class 4 systems are inherently likely to be universal. Cook presented his proof at a Santa Fe Institute conference on Cellular Automata in 1998, but Wolfram blocked the proof from being included in the conference proceedings, as Wolfram did not want it announced before the publication of A New Kind of Science. In 2004, Cook's proof was finally published in Wolfram's journal Complex Systems (Vol. 15, No. 1), over ten years after Cook came up with it. Rule 110 has since served as the basis for some of the smallest known universal Turing machines, which draw on concepts developed in the proof of its universality.
A CA is said to be reversible if for every current configuration of the CA there is exactly one past configuration (preimage). If one thinks of a CA as a function mapping configurations to configurations, reversibility implies that this function is bijective.
For one-dimensional CA there are known algorithms for deciding whether a rule is reversible or irreversible.[8][9] For CA of two or more dimensions it has been proved that reversibility is undecidable for arbitrary rules. The proof, by Jarkko Kari, is related to the tiling problem with Wang tiles.
Reversible CA are often used to simulate such physical phenomena as gas and fluid dynamics, since they obey the laws of thermodynamics. Such CA have rules specially constructed to be reversible. Such systems have been studied by Tommaso Toffoli, Norman Margolus and others.
For finite CAs that are not reversible, there must exist patterns for which there are no previous states: the global map sends a finite set of configurations into itself, so if it is not injective it cannot be surjective either. These patterns are called Garden of Eden patterns. In other words, no pattern exists which will develop into a Garden of Eden pattern.
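On a small cyclic universe such patterns can be found by brute force, as in this sketch (rule 30 and the 8-cell grid are illustrative choices; the classical Garden of Eden results concern infinite grids):

```python
from itertools import product

def cyclic_step(cells, rule):
    """One elementary-CA step on a cyclic (wrap-around) 1D grid."""
    n = len(cells)
    return tuple(
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1)
                  | cells[(i + 1) % n])) & 1
        for i in range(n))

def gardens_of_eden(rule, n):
    """All n-cell cyclic configurations with no preimage under rule."""
    everything = list(product((0, 1), repeat=n))
    reachable = {cyclic_step(c, rule) for c in everything}
    return [c for c in everything if c not in reachable]

# Rule 30 on an 8-cell cycle: the pigeonhole argument above guarantees
# unreachable configurations whenever the rule is not injective.
print(len(gardens_of_eden(30, 8)), "of", 2 ** 8, "configurations are orphans")
```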
Several techniques can be used to explicitly construct reversible CA with known inverses. Two common ones are the second order technique and the partitioning technique, both of which involve modifying the definition of a CA in some way. Although such automata do not strictly satisfy the definition given above, it can be shown that they can be emulated by conventional CAs with sufficiently large neighborhoods and numbers of states, and can therefore be considered a subset of conventional CA.
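As an illustration, the second-order technique makes an arbitrary rule reversible by combining it, via XOR, with the state two time steps back. A minimal sketch for elementary rules on a cyclic grid (rule 30 is an illustrative choice):

```python
def second_order_step(prev, curr, rule):
    """Second-order update for an elementary rule on a cyclic grid:
    new cell = f(neighborhood at time t) XOR cell at time t-1.

    Because XOR is its own inverse, the past state can always be
    recovered as prev = f(neighborhood) XOR new, so the automaton is
    reversible by construction."""
    n = len(curr)
    new = []
    for i in range(n):
        pattern = (curr[(i - 1) % n] << 2) | (curr[i] << 1) | curr[(i + 1) % n]
        new.append(((rule >> pattern) & 1) ^ prev[i])
    return new

a = [0, 1, 1, 0, 1, 0, 0, 1]
b = [1, 0, 0, 1, 0, 0, 1, 1]
history = [a, b]
for _ in range(10):
    history.append(second_order_step(history[-2], history[-1], 30))
# Run backwards by swapping the roles of the last two generations.
x, y = history[-1], history[-2]
for _ in range(10):
    x, y = y, second_order_step(x, y, 30)
assert (y, x) == (a, b)   # the initial pair is recovered exactly
```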
A special class of CAs are totalistic CAs. The state of each cell in a totalistic CA is represented by a number (usually an integer value drawn from a finite set), and the value of a cell at time t depends only on the sum of the values of the cells in its neighborhood (possibly including the cell itself) at time t−1. If the value of a cell at time t depends on both its own state and the sum of its neighbors' values at time t−1, then the CA is properly called outer totalistic, although the distinction is not always made. Conway's Game of Life is an example of an outer totalistic CA with cell values 0 and 1.
A notation exists to describe the rulesets of such two-state CAs: an initial letter indicates the neighbourhood of each cell, and the digits following the letters S (for survival) and B (for birth) give the neighbor sums at which those events occur. In this notation Conway's Game of Life is M:S23/B3. This notation has been extended for non-totalistic CAs, where a letter or letters follow each digit indicating which patterns of neighbours cause survival or birth events.
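A sketch of how such a rulestring might be parsed and applied (the parsing details are an assumption; actual implementations of this notation vary):

```python
def parse_rulestring(rulestring):
    """Split a rulestring like 'S23/B3' into survival and birth sums.

    The exact syntax varies between implementations; this parser
    handles only the simple S.../B... form used here."""
    survive, born = set(), set()
    for part in rulestring.upper().split("/"):
        digits = {int(d) for d in part[1:]}
        (survive if part.startswith("S") else born).update(digits)
    return survive, born

def outer_totalistic_step(grid, survive, born):
    """Outer totalistic step on a torus: the cell's own state selects
    which digit set the sum of its 8 Moore neighbors is checked against."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            s = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            new[r][c] = 1 if s in (survive if grid[r][c] else born) else 0
    return new

survive, born = parse_rulestring("S23/B3")   # Conway's Game of Life
```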
Stephen Wolfram, in A New Kind of Science and in several papers dating from the mid-1980s, defined four classes into which cellular automata and several other simple computational models can be divided depending on their behavior. While earlier studies in cellular automata tended to try to identify types of patterns for specific rules, Wolfram's classification was the first attempt to classify the rules themselves. In order of complexity the classes are:
Class 1: Nearly all initial patterns evolve quickly into a stable, homogeneous state. Any randomness in the initial pattern disappears.
Class 2: Nearly all initial patterns evolve quickly into stable or oscillating structures. Some of the randomness in the initial pattern may filter out, but some remains. Local changes to the initial pattern tend to remain local.
Class 3: Nearly all initial patterns evolve in a pseudo-random or chaotic manner. Any stable structures that appear are quickly destroyed by the surrounding noise. Local changes to the initial pattern tend to spread indefinitely.
Class 4: Nearly all initial patterns evolve into structures that interact in complex and interesting ways, with the formation of local structures that are able to survive for long periods of time.
These definitions are qualitative in nature and there is some room for interpretation. According to Wolfram,
...with almost any general classification scheme there are inevitably cases which get assigned to one class by one definition and another class by another definition. And so it is with cellular automata: there are occasionally rules...that show some features of one class and some of another.[10]
Recently there has been keen interest in building decentralized systems, be they sensor networks or more sophisticated micro-level structures designed at the network level and aimed at decentralized information processing. The idea of emergent computation came from the need to use distributed systems for information processing at the global level.[11] The area is still in its infancy, but some researchers have started taking the idea seriously. Melanie Mitchell, Professor of Computer Science at Portland State University and External Professor at the Santa Fe Institute,[12] has been working on the idea of using self-evolving cellular arrays to study emergent computation and distributed information processing.[11] Mitchell and colleagues are using evolutionary computation to program cellular arrays.[13] Computation in decentralized systems is very different from that in classical systems, where the information is processed at some central location depending on the system's state. In decentralized systems, information processing occurs in the form of global and local pattern dynamics.
The inspiration for this approach comes from complex natural systems like insect colonies, nervous systems and economic systems.[13] The focus of the research is to understand how computation occurs in an evolving decentralized system. In order to model some of the features of these systems and study how they give rise to emergent computation, Mitchell and collaborators at the SFI have applied genetic algorithms (GAs) to evolve patterns in cellular automata. They were able to show that the GA discovered rules that gave rise to sophisticated emergent computational strategies.[14] Mitchell's group used a one-dimensional binary array where each cell has two neighbors. The array can be thought of as a circle where the first and last cells are neighbors. The evolution of the array was tracked through the number of ones and zeros after each iteration. The results were plotted to show clearly how the network evolved and what sort of emergent computation was visible.
The results produced by Mitchell’s group are interesting, in that a very simple array of cellular automata produced results showing coordination over global scale, fitting the idea of emergent computation. Future work in the area may include more sophisticated models using cellular automata of higher dimensions, which can be used to model complex natural systems.
Rule 30 was originally suggested as a possible stream cipher for use in cryptography (see CA-1.1).
Cellular automata have been proposed for public-key cryptography. The one-way function is the evolution of a finite CA whose inverse is believed to be hard to find. Given the rule, anyone can easily calculate future states, but it appears to be very difficult to calculate previous states. However, the designer of the rule can create it in such a way as to be able to easily invert it. Therefore, it is apparently a trapdoor function, and can be used as a public-key cryptosystem. The security of such systems is not currently known.
There are many possible generalizations of the CA concept.
One way is by using something other than a rectangular (cubic, etc.) grid. For example, if a plane is tiled with regular hexagons, those hexagons could be used as cells. In many cases the resulting cellular automata are equivalent to those with rectangular grids with specially designed neighborhoods and rules.
Also, rules can be probabilistic rather than deterministic. A probabilistic rule gives, for each pattern at time t, the probabilities that the central cell will transition to each possible state at time t+1. Sometimes a simpler rule is used; for example: "The rule is the Game of Life, but on each time step there is a 0.001% probability that each cell will transition to the opposite color." When the rules are represented as computational verb rules (verb rules, for short), we have computational verb cellular networks (CVCN).[15][16][17] In a CVCN, the state of a cell is changed based on verb rules such as: "IF the state of the left neighbor increases AND the state of the right neighbor drops dramatically, THEN the center cell increases its state very slowly".
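The Game-of-Life-with-noise example can be sketched directly (the toroidal grid handling matches the Life sketch above; the flip probability matches the 0.001% of the example):

```python
import random

def noisy_life_step(grid, flip_prob=0.00001):
    """One deterministic Game of Life generation, after which every cell
    independently flips to the opposite state with probability
    flip_prob (0.00001 = the 0.001% of the example above)."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            alive = 1 if (n == 3 or (grid[r][c] and n == 2)) else 0
            # The probabilistic part of the rule: a rare random flip.
            if random.random() < flip_prob:
                alive ^= 1
            new[r][c] = alive
    return new
```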
The neighborhood or rules could change over time or space. For example, initially the new state of a cell could be determined by the horizontally adjacent cells, but for the next generation the vertical cells would be used.
The grid can be finite, so that patterns can "fall off" the edge of the universe.
In CA, the new state of a cell is not affected by the new state of other cells. This could be changed so that, for instance, a 2 by 2 block of cells can be determined by itself and the cells adjacent to itself.
There are continuous automata. These are like totalistic CA, but instead of the rule and states being discrete (e.g. a table, using states {0,1,2}), continuous functions are used, and the states become continuous (usually values in [0,1]). The state of a location is a finite number of real numbers. Certain CA can yield diffusion in liquid patterns in this way.
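One simple continuous rule of this kind replaces the lookup table with "average the neighborhood, add a constant, keep the fractional part". A minimal sketch (the constant and the cyclic 1D grid are illustrative choices):

```python
def continuous_step(cells, k=0.25):
    """One step of a simple continuous automaton on a cyclic 1D grid:
    average each cell's 3-cell neighborhood, add the constant k, and
    keep only the fractional part so states remain in [0, 1)."""
    n = len(cells)
    return [((cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n]) / 3 + k) % 1.0
            for i in range(n)]

row = [0.0] * 16
row[8] = 1.0            # a single "drop" in an otherwise uniform row
for _ in range(5):
    row = continuous_step(row)
```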
Continuous spatial automata have a continuum of locations. The state of a location is a finite number of real numbers. Time is also continuous, and the state evolves according to differential equations. One important example is reaction-diffusion textures, differential equations proposed by Alan Turing to explain how chemical reactions could create the stripes on zebras and the spots on leopards.[18] When these are approximated by CA, such CAs often yield similar patterns. MacLennan [1] considers continuous spatial automata as a model of computation.
There are known examples of continuous spatial automata which exhibit propagating phenomena analogous to gliders in the Game of Life.[19]
Some living things use naturally occurring cellular automata in their functioning.
Patterns of some seashells, like the ones in the genera Conus and Cymbiola, are generated by natural CA. The pigment cells reside in a narrow band along the shell's lip. Each cell secretes pigments according to the activating and inhibiting activity of its neighbouring pigment cells, obeying a natural version of a mathematical rule. As the shell grows slowly, the band of cells leaves the colored pattern behind on the shell. For example, the widespread species Conus textile bears a pattern resembling the rule 30 CA described above.
Plants regulate their intake and loss of gases via a CA mechanism. Each stoma on the leaf acts as a cell.[20]
Neural networks can be used as cellular automata, too. The complex moving wave patterns on the skin of cephalopods are a good display of corresponding activation patterns in the animals' brain.[21]
The Belousov-Zhabotinsky reaction is a spatio-temporal chemical oscillator that can be simulated by means of a cellular automaton. In the 1950s A. M. Zhabotinsky (extending the work of B. P. Belousov) discovered that when a thin, homogeneous layer of a mixture of malonic acid, acidified bromate, and a ceric salt was left undisturbed, fascinating geometric patterns such as concentric circles and spirals propagated across the medium. In the "Computer Recreations" section of the August 1988 issue of Scientific American,[22] A. K. Dewdney discussed a cellular automaton[23] developed by Martin Gerhardt and Heike Schuster of the University of Bielefeld (West Germany). This automaton produces wave patterns resembling those in the Belousov-Zhabotinsky reaction.
CA processors are a physical (rather than software-only) implementation of CA concepts, which can process information computationally. Processing elements are arranged in a regular grid of identical cells. The grid is usually a square tiling, or tessellation, of two or three dimensions; other tilings are possible, but not yet used. Cell states are determined only by interactions with the small number of adjoining cells. Cells interact and communicate directly only with adjacent, neighboring cells; no means exists to communicate directly with cells farther away.
One such CA processor array configuration is the systolic array.
Cell interaction can be via electric charge, magnetism, vibration (phonons at quantum scales), or any other physically useful means. This can be done in several ways so no wires are needed between any elements.
This is very unlike the von Neumann design used in most processors today, which is divided into sections with elements that can communicate with distant elements over wires.
CA have been applied to the design of error correcting codes in the paper "Design of CAECC – Cellular Automata Based Error Correcting Code" by D. Roy Chowdhury, S. Basu, I. Sen Gupta, and P. Pal Chaudhuri. The paper defines a new scheme of building SEC-DED codes using CA, and also reports a fast hardware decoder for the code.
As Andrew Ilachinski points out in his Cellular Automata, many scholars with an interest in fundamental issues have raised the question: "is the universe a CA?"[24] Ilachinski argues that the importance of this question may be better appreciated with a simple observation. Consider the evolution of rule 110: if it were some kind of "alien physics", what would be a reasonable description of the observed patterns?[25] An observer who did not know how the images were generated would probably end up conjecturing about the movement of some particle-like objects (indeed, physicist Jim Crutchfield has made a rigorous mathematical theory out of this idea, proving the statistical emergence of "particles" from CA).[26] If this is so, why couldn't our world, which physics today describes in terms of particle-like objects, be a CA at its most fundamental level?
While a complete theory along these lines is still to be developed, entertaining and developing this hypothesis has led scholars to interesting speculation and fruitful intuitions on how we can make sense of our world within a discrete framework. Marvin Minsky, the AI pioneer, investigated how to understand particle interactions with a four-dimensional CA lattice;[27] Konrad Zuse, who built the first working programmable computer, developed an irregularly organized lattice to address the question of the information content of particles.[28] More recently, Edward Fredkin proposed what he terms the "finite nature hypothesis", i.e. the idea that "ultimately every quantity of physics, including space and time, will turn out to be discrete and finite."[29] Fredkin, together with Stephen Wolfram, is probably the strongest proponent of a CA-based physics: both mathematically and philosophically, Fredkin's ideas on the subject are articulated and interesting.
In recent years, other suggestions along these lines have emerged from the literature on non-standard computation. Stephen Wolfram's A New Kind of Science considers CA to be the key to understanding a variety of subjects, physics included. The Mathematics of the Models of Reference, created by iLabs[30] founder Gabriele Rossi and developed with Francesco Berto and Jacopo Tagliabue, features an original 2D/3D universe based on a new rhombic-dodecahedron-based lattice and a single rule: this model satisfies universality (it is equivalent to a Turing machine) and perfect reversibility (a desideratum if one wants to conserve various quantities easily and never lose information), and it comes embedded in a first-order theory allowing computable, qualitative statements on the evolution of the universe.[31]